This article provides a non-technical guide to interpreting SHAP analyses, aimed at explaining machine learning models to non-expert stakeholders, covering both local and global interpretability through various visualization methods.
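The local/global distinction the article covers can be illustrated without the `shap` library itself: a minimal sketch below computes exact Shapley values by brute-force coalition enumeration for a hypothetical three-feature model (the model, background data, and all names are illustrative assumptions, not the article's code). The per-instance `phi` vector is a local explanation; averaging `|phi|` across instances gives the global importance that SHAP summary plots display.

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)
X_bg = rng.normal(size=(200, 3))  # hypothetical background dataset

def model(X):
    # Toy model for illustration only
    return 2 * X[:, 0] + X[:, 1] ** 2 + 0.5 * X[:, 2]

def value(S, x, X_bg, f):
    """v(S): expected prediction with the features in S fixed to x's values,
    averaging the remaining features over the background data."""
    Xc = X_bg.copy()
    for i in S:
        Xc[:, i] = x[i]
    return f(Xc).mean()

def shapley_values(x, X_bg, f):
    """Exact Shapley values via enumeration of all feature coalitions."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                phi[i] += w * (value(S + (i,), x, X_bg, f) - value(S, x, X_bg, f))
    return phi

x = np.array([1.0, 2.0, -1.0])
phi = shapley_values(x, X_bg, model)
# Efficiency property: the attributions sum to f(x) minus the mean prediction.
print(phi, phi.sum(), model(x[None])[0] - model(X_bg).mean())
```

Brute-force enumeration is exponential in the number of features; the `shap` library's explainers exist precisely to approximate these values efficiently for real models.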
This article introduces interpretable clustering, a field that aims to provide insights into the characteristics of clusters formed by clustering algorithms. It discusses the limitations of traditional clustering methods and highlights the benefits of interpretable clustering in understanding data patterns.
This article explains the concept and use of Friedman's H-statistic for finding interactions in machine learning models.
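The H-statistic can be sketched from partial dependence functions alone: for a feature pair, H² is the share of the joint partial dependence's variance not explained by the two one-way partial dependences. The sketch below uses a hypothetical toy model and a hand-rolled partial dependence helper (all names are assumptions for illustration, not the article's code); a pair with a genuine interaction scores high, an additive pair scores near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))

def model(X):
    # Toy model: x0 and x1 interact multiplicatively; x2 enters additively.
    return X[:, 0] + X[:, 1] + 5 * X[:, 0] * X[:, 1] + X[:, 2]

def partial_dependence(f, X, features, grid_values):
    """Average prediction with the given features clamped to each grid point."""
    pd_vals = []
    for vals in grid_values:
        Xc = X.copy()
        Xc[:, features] = vals
        pd_vals.append(f(Xc).mean())
    return np.asarray(pd_vals)

def h_statistic(f, X, j, k):
    """Friedman's H^2 for features j and k: variance of the joint PD left
    unexplained by the sum of the centered one-way PDs."""
    grid_j, grid_k = X[:, j], X[:, k]
    pd_j = partial_dependence(f, X, [j], grid_j[:, None])
    pd_k = partial_dependence(f, X, [k], grid_k[:, None])
    pd_jk = partial_dependence(f, X, [j, k], np.column_stack([grid_j, grid_k]))
    # Center each PD function before comparing.
    pd_j = pd_j - pd_j.mean()
    pd_k = pd_k - pd_k.mean()
    pd_jk = pd_jk - pd_jk.mean()
    return np.sum((pd_jk - pd_j - pd_k) ** 2) / np.sum(pd_jk ** 2)

print(h_statistic(model, X, 0, 1))  # interacting pair: well above zero
print(h_statistic(model, X, 0, 2))  # additive pair: essentially zero
```

In practice the partial dependence functions would come from a fitted model (e.g. via `sklearn.inspection.partial_dependence`) rather than a known formula, but the H² ratio is computed the same way.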